
    Representational Similarity Analysis – Connecting the Branches of Systems Neuroscience

    A fundamental challenge for systems neuroscience is to quantitatively relate its three major branches of research: brain-activity measurement, behavioral measurement, and computational modeling. Using measured brain-activity patterns to evaluate computational network models is complicated by the need to define the correspondence between the units of the model and the channels of the brain-activity data, e.g., single-cell recordings or voxels from functional magnetic resonance imaging (fMRI). Similar correspondence problems complicate relating activity patterns between different modalities of brain-activity measurement (e.g., fMRI and invasive or scalp electrophysiology), and between subjects and species. In order to bridge these divides, we suggest abstracting from the activity patterns themselves and computing representational dissimilarity matrices (RDMs), which characterize the information carried by a given representation in a brain or model. Building on a rich psychological and mathematical literature on similarity analysis, we propose a new experimental and data-analytical framework called representational similarity analysis (RSA), in which multi-channel measures of neural activity are quantitatively related to each other and to computational theory and behavior by comparing RDMs. We demonstrate RSA by relating representations of visual objects as measured with fMRI in early visual cortex and the fusiform face area to computational models spanning a wide range of complexities. The RDMs are simultaneously related via second-level application of multidimensional scaling and tested using randomization and bootstrap techniques. We discuss the broad potential of RSA, including novel approaches to experimental design, and argue that these ideas, which have deep roots in psychology and neuroscience, will allow the integrated quantitative analysis of data from all three branches, thus contributing to a more unified systems neuroscience.
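
    The central computation sketched in this abstract (build an RDM per representation, then compare RDMs) can be illustrated in a few lines of Python. The array shapes, the correlation-distance measure, and the Spearman comparison below are illustrative assumptions, not the authors' full pipeline, which also includes second-level multidimensional scaling and randomization/bootstrap tests.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        # Hypothetical inputs: rows are experimental conditions (e.g., object images),
        # columns are measurement channels (voxels) or model units.
        rng = np.random.default_rng(0)
        brain_patterns = rng.standard_normal((20, 500))   # 20 conditions x 500 voxels
        model_patterns = rng.standard_normal((20, 100))   # same 20 conditions in a model

        # An RDM holds the pairwise dissimilarities (here: correlation distance)
        # between condition patterns; pdist returns its upper triangle as a vector.
        brain_rdm = pdist(brain_patterns, metric="correlation")
        model_rdm = pdist(model_patterns, metric="correlation")

        # Representations are compared by rank-correlating their RDMs, which
        # sidesteps any voxel-to-unit correspondence problem.
        rho, p = spearmanr(brain_rdm, model_rdm)
        print(f"RDM similarity: Spearman rho = {rho:.3f} (p = {p:.3f})")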

    Complementary Roles of Systems Representing Sensory Evidence and Systems Detecting Task Difficulty During Perceptual Decision Making

    Perceptual decision making is a multi-stage process in which incoming sensory information is used to select one option from several alternatives. Researchers typically have adopted one of two conceptual frameworks to define the criteria for determining whether a brain region is involved in decision computations. One framework, building on single-unit recordings in monkeys, posits that activity in a region involved in decision making reflects the accumulation of evidence toward a decision threshold, thus showing the lowest level of BOLD signal during the hardest decisions. The other framework instead posits that activity in a decision-making region reflects the difficulty of a decision, thus showing the highest level of BOLD signal during the hardest decisions. We had subjects perform a face detection task on degraded face images while we simultaneously recorded BOLD activity. We searched for brain regions where changes in BOLD activity during this task supported either of these frameworks by calculating the correlation of BOLD activity with reaction time, a measure of task difficulty. We found that the right supplementary eye field, right frontal eye field, and right inferior frontal gyrus had increased activity relative to baseline that positively correlated with reaction time, while the left superior frontal sulcus and left middle temporal gyrus had decreased activity relative to baseline that negatively correlated with reaction time. We propose that a simple mechanism that scales a region's activity based on task demands can explain our results.
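
    The region-labelling logic described in this abstract comes down to the sign of the correlation between trial-wise activity and reaction time. The sketch below uses simulated per-trial estimates (all values hypothetical), not the authors' data or GLM.

        import numpy as np
        from scipy.stats import pearsonr

        rng = np.random.default_rng(1)
        rt = rng.uniform(0.4, 1.6, size=80)               # per-trial reaction times (s), simulated
        beta = 0.6 * rt + rng.normal(0.0, 0.3, size=80)   # per-trial BOLD amplitudes, simulated

        r, p = pearsonr(beta, rt)
        # For a region with above-baseline activity, r > 0 matches the
        # task-difficulty framework (highest BOLD for the hardest decisions),
        # while r < 0 matches the evidence-accumulation framework (lowest BOLD
        # for the hardest decisions).
        print(f"BOLD-RT correlation: r = {r:.2f}, p = {p:.3f}")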

    Introducing alternative-based thresholding for defining functional regions of interest in fMRI

    In fMRI research, one often aims to examine activation in specific functional regions of interest (fROIs). Current statistical methods tend to localize fROIs inconsistently, focusing on avoiding detection of false activation. However, not missing true activation is equally important in this context. In this study, we explored the potential of an alternative-based thresholding (ABT) procedure, in which evidence against the null hypothesis of no effect and evidence against a prespecified alternative hypothesis are both measured, so that false positives and false negatives are controlled directly. The procedure was validated in the context of localizer tasks on simulated brain images and on a real data set of 100 runs per subject. Voxels categorized as active with ABT can be confidently included in the definition of the fROI, while inactive voxels can be confidently excluded. Additionally, the ABT method complements classic null hypothesis significance testing with valuable information by distinguishing voxels that show evidence against both the null and the alternative from voxels for which the alternative hypothesis cannot be rejected despite a lack of evidence against the null.
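
    The following is a schematic sketch of the general idea behind alternative-based thresholding, not the authors' exact procedure: each voxel is tested both against the null of no effect and against a prespecified alternative effect size, and the two p-values jointly determine its category. The data layout, effect size, and thresholds are assumptions.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        betas = rng.normal(0.2, 1.0, size=(100, 1000))   # 100 runs x 1000 voxels, simulated
        delta1 = 0.5                                     # prespecified alternative effect size (assumed)
        alpha = 0.05

        n = betas.shape[0]
        mean = betas.mean(axis=0)
        sem = betas.std(axis=0, ddof=1) / np.sqrt(n)

        p_null = stats.t.sf(mean / sem, df=n - 1)              # evidence against H0: effect = 0
        p_alt = stats.t.cdf((mean - delta1) / sem, df=n - 1)   # evidence against H1: effect = delta1

        active = (p_null < alpha) & (p_alt >= alpha)     # reject H0, cannot reject H1: include in fROI
        inactive = (p_null >= alpha) & (p_alt < alpha)   # reject H1, cannot reject H0: exclude
        undecided = ~(active | inactive)                 # evidence against both, or against neither
        print(active.sum(), inactive.sum(), undecided.sum())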

    Variance decomposition for single-subject task-based fMRI activity estimates across many sessions

    Here we report an exploratory within-subject variance decomposition analysis conducted on a task-based fMRI dataset with an unusually large number of repeated measures (i.e., 500 trials in each of three different subjects) distributed across 100 functional scans and 9 to 10 different sessions. Within-subject variance was segregated into four primary components: variance across sessions, variance across runs within a session, variance across blocks within a run, and residual measurement/modeling error. Our results reveal inhomogeneous and distinct spatial distributions of these variance components across significantly active voxels in grey matter. Measurement error is dominant across the whole brain. Detailed evaluation of the remaining three components shows that across-session variance is the second largest contributor to total variance in occipital cortex, while across-run variance is the second dominant source for the rest of the brain. Network-specific analysis revealed that across-block variance contributes more to total variance in higher-order cognitive networks than in somatosensory cortex. Moreover, in some higher-order cognitive networks across-block variance can exceed across-session variance. These results help us better understand the temporal (i.e., across blocks, runs, and sessions) and spatial (i.e., across different networks) distributions of within-subject natural variability in estimates of task responses in fMRI. They also suggest that different brain regions will show different natural levels of test-retest reliability even in the absence of residual artifacts and with sufficiently high contrast-to-noise measurements. Further confirmation with a larger sample of subjects and other tasks is necessary to ensure the generality of these results.
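
    One generic way to estimate such nested variance components is a mixed-effects model with random intercepts for session, run within session, and block within run. The sketch below uses simulated data and hypothetical column names for a single voxel or ROI; it is not the decomposition code used in the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in for one voxel/ROI; labels and effect sizes are hypothetical.
        rng = np.random.default_rng(3)
        rows = []
        for ses in range(9):
            ses_eff = rng.normal(0.0, 0.3)
            for run in range(4):
                run_eff = rng.normal(0.0, 0.2)
                for block in range(3):
                    block_eff = rng.normal(0.0, 0.1)
                    for trial in range(5):
                        rows.append({
                            "session": ses,
                            "run": f"{ses}-{run}",            # labels unique across sessions
                            "block": f"{ses}-{run}-{block}",  # labels unique across runs
                            "beta": 1.0 + ses_eff + run_eff + block_eff
                                    + rng.normal(0.0, 0.5),   # residual error
                        })
        df = pd.DataFrame(rows)

        # Random intercepts for session, run-within-session, and block-within-run.
        model = smf.mixedlm(
            "beta ~ 1", df, groups="session", re_formula="1",
            vc_formula={"run": "0 + C(run)", "block": "0 + C(block)"},
        )
        fit = model.fit()
        print(fit.cov_re)   # across-session variance
        print(fit.vcomp)    # across-run and across-block variance components
        print(fit.scale)    # residual measurement/modeling error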

    Knowing what you know in brain segmentation using Bayesian deep neural networks

    In this paper, we describe a Bayesian deep neural network (DNN) for predicting FreeSurfer segmentations of structural MRI volumes, in minutes rather than hours. The network was trained and evaluated on a large dataset (n = 11,480), obtained by combining data from more than a hundred different sites, and also evaluated on another completely held-out dataset (n = 418). The network was trained using a novel spike-and-slab dropout-based variational inference approach. We show that, on these datasets, the proposed Bayesian DNN outperforms previously proposed methods, in terms of both the similarity between the segmentation predictions and the FreeSurfer labels and the usefulness of the estimated uncertainty of these predictions. In particular, we demonstrate that the prediction uncertainty of this network at each voxel is a good indicator of whether the network has made an error, and that the uncertainty across the whole brain can predict the manual quality control ratings of a scan. The proposed Bayesian DNN method should be applicable to any new network architecture for addressing the segmentation problem. Comment: Submitted to Frontiers in Neuroinformatics.
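
    The voxelwise uncertainty idea can be illustrated with a generic Monte Carlo sampling loop over a stochastic segmentation network. The paper's spike-and-slab dropout variational inference is more specific than this; the network, shapes, and sample count below are placeholders.

        import torch

        def mc_segment(model, volume, n_samples=20):
            """Stochastic forward passes -> predicted labels plus voxelwise uncertainty.

            model  : segmentation network whose dropout (or other stochastic) layers stay
                     active at inference time (placeholder for the paper's Bayesian DNN)
            volume : tensor of shape (1, 1, D, H, W)
            """
            model.train()  # keep the stochastic layers sampling
            with torch.no_grad():
                probs = torch.stack(
                    [torch.softmax(model(volume), dim=1) for _ in range(n_samples)]
                )                                   # (n_samples, 1, n_classes, D, H, W)
            mean = probs.mean(dim=0)                # predictive class probabilities
            entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=1)  # voxelwise uncertainty
            return mean.argmax(dim=1), entropy

        # Toy usage with a stand-in network (the real model is a 3D segmentation DNN):
        net = torch.nn.Sequential(torch.nn.Conv3d(1, 4, 3, padding=1), torch.nn.Dropout3d(0.2))
        labels, uncertainty = mc_segment(net, torch.randn(1, 1, 16, 16, 16))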

    The art and science of using quality control to understand and improve fMRI data

    Designing and executing a good quality control (QC) process is vital to robust and reproducible science and is often taught through hands-on training. As fMRI research trends toward studies with larger sample sizes and highly automated processing pipelines, the people who analyze data are often distinct from those who collect and preprocess the data. While there are good reasons for this trend, it also means that important information about how data were acquired, and their quality, may be missed by those working at later stages of these workflows. Similarly, an abundance of publicly available datasets, where people (not always correctly) assume others have already validated data quality, makes it easier for trainees to advance in the field without learning how to identify problematic data. This manuscript is designed as an introduction for researchers who are already familiar with fMRI, but who did not get hands-on QC training or who want to think more deeply about QC. This could be someone who has analyzed fMRI data but is planning to personally acquire data for the first time, or someone who regularly uses openly shared data and wants to learn how to better assess data quality. We describe why good QC processes are important, explain key priorities and steps for fMRI QC, and, as part of the FMRI Open QC Project, demonstrate some of these steps by using AFNI software and AFNI's QC reports on an openly shared dataset. A good QC process is context dependent and should address whether data have the potential to answer a scientific question, whether any variation in the data has the potential to skew or hide key results, and whether any problems can potentially be addressed through changes in acquisition or data processing. Automated metrics are essential and can often highlight a possible problem, but human interpretation at every stage of a study is vital for understanding causes and potential solutions.
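
    As a small example of the kind of automated metric that can flag a problematic run (the manuscript itself demonstrates AFNI's far richer QC reports), the sketch below computes temporal SNR with nibabel; the file name is hypothetical.

        import numpy as np
        import nibabel as nib

        # Hypothetical file name; any 4D BOLD NIfTI series would do.
        img = nib.load("sub-01_task-faces_bold.nii.gz")
        data = img.get_fdata()

        # Temporal SNR: mean over time divided by standard deviation over time.
        mean_t = data.mean(axis=-1)
        tsnr = mean_t / (data.std(axis=-1) + 1e-6)
        print("median tSNR (nonzero voxels):", np.median(tsnr[mean_t > 0]))

        # A low median tSNR, or a sharp drop relative to comparable runs, is a cue
        # to inspect the images directly, not a verdict on its own.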

    Different activation signatures in the primary sensorimotor and higher-level regions for haptic three-dimensional curved surface exploration

    Haptic object perception begins with continuous exploratory contact, and the human brain needs to accumulate sensory information continuously over time. However, it is still unclear how the primary sensorimotor cortex (PSC) interacts with higher-level regions during haptic exploration over time. This functional magnetic resonance imaging (fMRI) study investigates time-dependent haptic object processing by examining brain activity during haptic 3D curvature and roughness estimation. For this experiment, we designed sixteen haptic stimuli (4 kinds of curves × 4 varieties of roughness) for the haptic curvature and roughness estimation tasks. Twenty participants were asked to move their right index and middle fingers along the surface twice and to estimate one of the two features (roughness or curvature), depending on the task instruction. We found that brain activity in several higher-level regions (e.g., the bilateral posterior parietal cortex) increased linearly with the number of curves during the haptic exploration phase. Surprisingly, we found that the contralateral PSC was parametrically modulated by the number of curves only during the late exploration phase, but not during the early exploration phase. In contrast, we found no similar parametric modulation patterns during the haptic roughness estimation task, either in the contralateral PSC or in higher-level regions. Thus, our findings suggest that haptic 3D object perception is processed across the cortical hierarchy, with the contralateral PSC interacting with higher-level regions over time in a manner that depends on the features of the object.
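
    A parametric-modulation analysis of the kind described in this abstract can be set up in nilearn by adding a 'modulation' column to the events table. The onsets, durations, TR, and mean-centring below are illustrative assumptions, not the study's actual design or software.

        import numpy as np
        import pandas as pd
        from nilearn.glm.first_level import make_first_level_design_matrix

        # One regressor per exploration event, scaled by the (mean-centred) number of curves.
        events = pd.DataFrame({
            "onset": [10.0, 40.0, 70.0, 100.0],          # hypothetical trial onsets (s)
            "duration": [8.0] * 4,                       # hypothetical exploration duration (s)
            "trial_type": ["curves_parametric"] * 4,
            "modulation": np.array([1, 2, 3, 4]) - 2.5,  # number of curves, mean-centred
        })
        frame_times = np.arange(0.0, 130.0, 2.0)         # assumed TR of 2 s
        design = make_first_level_design_matrix(frame_times, events, hrf_model="spm")

        # The 'curves_parametric' column can then be tested voxelwise in a standard
        # first-level GLM for a linear effect of curve number.
        print(design.columns.tolist())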

    Layer-specific activation of sensory input and predictive feedback in the human primary somatosensory cortex

    When humans perceive a sensation, their brains integrate inputs from sensory receptors and process them based on their expectations. The mechanisms of this predictive coding in the human somatosensory system are not fully understood. We fill a basic gap in our understanding of the predictive processing of somatosensation by examining layer-specific activity related to sensory input and predictive feedback in the human primary somatosensory cortex (S1). We acquired submillimeter functional magnetic resonance imaging data at 7T (n = 10) during a task involving perceived, predictable, and unpredictable touch sequences. We demonstrate that sensory input from thalamic projections preferentially activates the middle layer, while the superficial and deep layers of S1 are more engaged by cortico-cortical predictive feedback. These findings are pivotal to understanding the mechanisms of tactile prediction processing in the human somatosensory cortex.
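
    Laminar profiles like those described in this abstract are often summarized by binning voxels along estimated cortical depth. The arrays and the three-bin split below are illustrative assumptions, not the study's laminar analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(4)
        depth = rng.uniform(0.0, 1.0, size=2000)   # estimated depth per S1 voxel (0 = deep, 1 = superficial), simulated
        beta = rng.normal(0.5, 1.0, size=2000)     # response estimates for one condition, simulated

        # Assign voxels to deep / middle / superficial bins and average the responses.
        bins = np.digitize(depth, [1.0 / 3.0, 2.0 / 3.0])
        for label, b in zip(["deep", "middle", "superficial"], range(3)):
            print(f"{label:12s} mean response: {beta[bins == b].mean():.2f}")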